📚 node [[convergence]]
⥅ related node [[2006 07 26 openid bounties and identity convergence]]
⥅ related node [[convergence]]
⥅ related node [[the agora is a platform for studying convergence]]
⥅ node [[convergence]] pulled by Agora

Convergence


Go back to the [[AI Glossary]]

Informally, convergence refers to a state reached during training in which training loss and validation loss change very little or not at all from one iteration to the next. In other words, a model reaches convergence when additional training on the current data will not improve it. In deep learning, loss values sometimes stay constant or nearly so for many iterations before finally descending, temporarily producing a false sense of convergence.
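The plateau behavior described above can be made concrete with a small sketch. The function below (names and thresholds are illustrative, not from any particular library) declares convergence when the loss has varied by less than a tolerance over a window of recent iterations, and the example shows how a temporary plateau can trip such a check even though training might still improve:

```python
def has_converged(loss_history, patience=5, tol=1e-4):
    """Return True when the last `patience` losses vary by less than `tol`."""
    if len(loss_history) < patience:
        return False
    recent = loss_history[-patience:]
    return max(recent) - min(recent) < tol

# A plateau in the loss curve: the last five values vary by only 0.002,
# so a window-based check reports convergence (a "false sense of
# convergence" if the loss would have descended further).
losses = [1.0, 0.50, 0.40, 0.401, 0.400, 0.399, 0.400, 0.401]
print(has_converged(losses, patience=5, tol=1e-2))  # → True
```

Real early-stopping implementations work the same way in spirit but usually monitor validation loss and keep the best weights seen so far, rather than halting on the training loss alone.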

See also early stopping.

See also Boyd and Vandenberghe, Convex Optimization.

📖 stoas
⥱ context